Goto

Collaborating Authors

 Telehealth


Telehealth Abortion Is Still Possible Without Mifepristone

WIRED

Courts may restrict access to the popular abortion medication mifepristone in the United States. Telehealth providers have backup plans in place. Abortion provider Carafem's phones were ringing nonstop over the weekend after a US federal appeals court reinstated a nationwide requirement that the drug mifepristone, one of two pills used for a medication abortion, must be obtained in person. The decision, handed down on Friday, left patients unsure if they could gain access to their treatment through telehealth. "People are afraid, and they're angry," says Carafem's chief operations officer, Melissa Grant. "I had people contact us saying, .


US Supreme Court temporarily lifts ban on abortion pill mail delivery

Al Jazeera

The United States Supreme Court has temporarily reinstated a rule allowing an abortion pill to be prescribed through telemedicine and dispensed through the mail, lifting a judicial ban that narrowed access to the medication nationwide. Justice Samuel Alito issued an interim order on Monday, pausing for one week a decision by the New Orleans-based 5th US Circuit Court of Appeals to reimpose an older federal rule requiring an in-person clinician visit to receive mifepristone. The Supreme Court's action, called an "administrative stay", gives the justices more time to review emergency requests by two manufacturers of mifepristone to ensure that the drug can be provided via telehealth and the mail while the legal challenge plays out. Alito ordered Louisiana to respond to the drugmakers' requests by Thursday and indicated that the administrative stay would expire on May 11. The court would be expected to extend the interim stay or formally decide the requests by that time.


Amazon Health AI brings a doctor to your pocket

FOX News

Amazon Health AI is a new digital health assistant that answers medical questions, explains lab results and connects users with Amazon One Medical providers for care.





Amazon is adding AI-powered assistant to One Medical

Engadget

Bungie's Marathon arrives on March 5 How to claim Verizon's $20 outage credit The agentic Health AI will be integrated into the primary care provider's app. Dubbed'Health AI,' Amazon says the tool provides 24/7 personalized health guidance based on your medical records. The company says Health AI can explain lab results, help manage medications, and book appointments for patients. Amazon also says it can analyze images but doesn't specify whether this means medical imaging or user uploaded photos. While the company specifically says the tool complements, but does not replace, a patient's healthcare provider, it also vaguely says the AI can answer general and complex health questions while considering your unique health history.


DDXPlus: A New Dataset For Automatic Medical Diagnosis

Neural Information Processing Systems

There has been a rapidly growing interest in Automatic Symptom Detection (ASD) and Automatic Diagnosis (AD) systems in the machine learning research literature, aiming to assist doctors in telemedicine services. These systems are designed to interact with patients, collect evidence about their symptoms and relevant antecedents, and possibly make predictions about the underlying diseases. Doctors would review the interactions, including the evidence and the predictions, collect if necessary additional information from patients, before deciding on next steps. Despite recent progress in this area, an important piece of doctors' interactions with patients is missing in the design of these systems, namely the differential diagnosis. Its absence is largely due to the lack of datasets that include such information for models to train on. In this work, we present a large-scale synthetic dataset of roughly 1.3 million patients that includes a differential diagnosis, along with the ground truth pathology, symptoms and antecedents for each patient. Unlike existing datasets which only contain binary symptoms and antecedents, this dataset also contains categorical and multi-choice symptoms and antecedents useful for efficient data collection. Moreover, some symptoms are organized in a hierarchy, making it possible to design systems able to interact with patients in a logical way. As a proof-of-concept, we extend two existing AD and ASD systems to incorporate the differential diagnosis, and provide empirical evidence that using differentials as training signals is essential for the efficiency of such systems or for helping doctors better understand the reasoning of those systems.


The Role of Doctors Is Changing Forever

The New Yorker

Others say they don't need us. It's time for us to think of ourselves not as the high priests of health care but as what we have always been: healers. Not long ago, I cared for a middle-aged man I'll call Jim, who was generally healthy but had recently started to feel sluggish. One of his friends told him to try a hormone supplement. After Jim saw on social media that Robert F. Kennedy, Jr., the Trump Administration's Secretary of Health and Human Services, had endorsed supplements as a part of an "anti-aging" regimen, he ordered one from a telehealth company. A few months later, he noticed swelling and pain in his calf. ChatGPT warned him that he might have a blood clot.


Many-to-One Adversarial Consensus: Exposing Multi-Agent Collusion Risks in AI-Based Healthcare

Bashir, Adeela, han, The Anh, Shamszaman, Zia Ush

arXiv.org Artificial Intelligence

Abstract--The integration of large language models (LLMs) into healthcare IoT systems promises faster decisions and improved medical support. LLMs are also deployed as multi-agent teams to assist AI doctors by debating, voting, or advising on decisions. However, when multiple assistant agents interact, coordinated adversaries can collude to create false consensus, pushing an AI doctor toward harmful prescriptions. We develop an experimental framework with scripted and unscripted doctor agents, adversarial assistants, and a verifier agent that checks decisions against clinical guidelines. Using 50 representative clinical questions, we find that collusion drives the Attack Success Rate (ASR) and Harmful Recommendation Rates (HRR) up to 100% in unprotected systems. This work provides the first systematic evidence of collusion risk in AI healthcare and demonstrates a practical, lightweight defence that ensures guideline fidelity. Artificial intelligence (AI) is increasingly integrated into healthcare IoT systems, supporting tasks such as remote patient monitoring, diagnosis, and treatment recommendations. In this setting, ensuring the security and trustworthiness of AI decisions is critical, since medical errors caused by unsafe recommendations can severely harm patients [1]. However, AI doctors and LLM-based clinical decision agents face multiple vulnerabilities.